Background-click supervision for temporal action localization
Abstract
Weakly supervised temporal action localization aims to learn instance-level action patterns from video-level labels; a key challenge is action-context confusion. To overcome this challenge, one recent work introduces an action-click supervision framework, which requires a comparable annotation cost yet steadily improves localization performance over conventional weakly supervised methods.
In this paper, we find that a stronger action localizer can be trained at the same annotation cost if the clicks are instead annotated on background video frames, because the performance bottleneck of existing approaches mainly lies in background errors.
Methodology
To this end, we convert action-click supervision into background-click supervision and develop a novel method, called BackTAL. BackTAL implements two-fold modeling of the background video frames:
1. Position Modeling: We not only conduct supervised learning on the annotated video frames but also design a score separation module to enlarge the score differences between the potential action frames and backgrounds.
2. Feature Modeling: We propose an affinity module to measure frame-specific similarities among neighboring frames and dynamically attend to informative neighbors when computing the temporal convolution.
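The score separation idea in position modeling can be sketched as a simple loss term. The sketch below is a hypothetical form, not the paper's exact objective: it pushes the mean score of the top-k highest-scoring frames (treated as potential action frames) away from the mean score at the clicked background frames; the function name, `top_k` parameter, and loss shape are all assumptions for illustration.

```python
import numpy as np

def score_separation_loss(action_scores, bg_click_mask, top_k=4):
    """Hypothetical sketch of a score separation objective.

    action_scores : (T,) per-frame action scores in [0, 1]
    bg_click_mask : (T,) boolean, True at annotated background frames
    """
    # Mean score of the top-k frames, treated as potential action frames.
    potential_action = np.sort(action_scores)[-top_k:].mean()
    # Mean score at the clicked background frames.
    background = action_scores[bg_click_mask].mean()
    # Minimizing this value enlarges the gap between the two means.
    return 1.0 - (potential_action - background)

scores = np.array([0.9, 0.8, 0.85, 0.7, 0.2, 0.1, 0.3, 0.15])
mask = np.array([False, False, False, False, True, True, False, True])
loss = score_separation_loss(scores, mask, top_k=4)  # smaller when scores separate
```

Minimizing this loss during training drives action frames toward high scores and annotated backgrounds toward low scores, which is the separation the module is designed to enlarge.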
The key insight is that by focusing on background frames rather than action frames, we can more effectively reduce false positives and improve the overall localization accuracy.
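The affinity module in feature modeling can likewise be sketched. The code below is a minimal assumed form, not the paper's implementation: each frame's cosine similarity to its temporal neighbors is softmax-normalized and used to reweight the neighbors' contributions inside a temporal convolution, so informative neighbors dominate the aggregation. The function name and the scalar-output kernel are illustrative assumptions.

```python
import numpy as np

def affinity_weighted_conv(features, kernel, embeddings):
    """Sketch of an affinity-modulated temporal convolution (assumed form).

    features   : (T, C) frame features to convolve
    kernel     : (K, C) temporal convolution weights, K odd
    embeddings : (T, D) per-frame embeddings used to measure affinity
    """
    T, _ = features.shape
    K = kernel.shape[0]
    r = K // 2
    # Unit-normalize embeddings so dot products are cosine similarities.
    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    out = np.zeros(T)
    for t in range(T):
        # Clamp neighbor indices at the sequence boundaries.
        idx = [min(max(t + o, 0), T - 1) for o in range(-r, r + 1)]
        affinity = norm[idx] @ norm[t]                        # (K,) similarities
        weights = np.exp(affinity) / np.exp(affinity).sum()   # softmax over neighbors
        # Each neighbor's contribution is rescaled by its affinity weight.
        out[t] = sum(w * (kernel[k] @ features[i])
                     for k, (i, w) in enumerate(zip(idx, weights)))
    return out

feats = np.random.RandomState(0).randn(6, 3)
kern = np.ones((3, 3)) / 3.0
emb = np.random.RandomState(1).randn(6, 3)
out = affinity_weighted_conv(feats, kern, emb)  # (6,) affinity-weighted responses
```

Compared with a plain temporal convolution, the frame-specific weights let the kernel downweight neighbors that are dissimilar to the current frame, e.g. background frames adjacent to an action boundary.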
Experimental Results
Experiments on three benchmarks demonstrate the strong performance of BackTAL.
The results show that:
• Background-click supervision is more effective than action-click supervision at the same annotation cost
• The two-fold modeling approach (position and feature modeling) effectively reduces background errors
• The score separation module successfully enlarges the discrimination between action and background frames
• The affinity module effectively captures temporal dependencies and improves feature representations